
Develop Sector-Specific Accountability With Cross-Sectoral Horizontal Capacity

March 27, 2024
Earned Trust through AI System Assurance

Government agencies and courts are considering how sector-specific laws, rules, and enforcement obligations apply to AI systems. Regulatory agencies are determining their powers to evaluate and demand information about some AI systems from the earliest stages of design.45 Commenters thought that additional accountability mechanisms should be tailored to the sector in which the system is deployed.46 AI deployment in sectors such as health, education, employment, finance, and transportation involves particular risks, the identification and mitigation of which often require sector-specific knowledge. At the same time, there is risk in every sector, and cross-sectoral risks arise both from foundation models and from specialized AI systems deployed in unintended contexts. Not every sectoral oversight body currently has sufficient AI sociotechnical expertise to define and implement accountability measures in all instances. The record surfaces interest in developing federal governmental capacity to address AI system impacts and coordinate governmental responses across sectors.47

We think it is likely that agencies will need additional capacities, and possibly additional authorities, to enable and require AI accountability. The body or bodies with cross-sectoral capacity might provide technical and legal support to sectoral regulators, as well as exercise other responsibilities related to AI accountability. This combination of sectoral and cross-sectoral capacities would facilitate the "baseline plus" approach to AI assurance practices described in the Calibrate Accountability Inputs To Risk Levels section.


45 See, e.g., supra note 11.

46 See, e.g., MITRE Comment at 17 (“The U.S. should rely on existing sector-specific regulators, equipping them to address new AI-related regulatory needs.”); HR Policy Association (HRPA) Comment at 4 (policymakers should “align, when possible, any new guidelines or standards for AI with existing government policies and commonly adopted employer best practices”); Johnson & Johnson Comment at 2 (recommending “regulatory approaches to AI that are contextual, proportional and use-case specific”); SIFMA Comment at 5 (supporting a “flexible, and principles-based approach to third-party AI risk management, with the applicable sectoral regulators providing additional specific requirements as needed” similar to cybersecurity and pointing to NYDFS Part 500.11(a) as instructive); Morningstar, Inc. Comment at 1-3 (financial regulations apply to AI systems); Intel Comment at 3 (identifying existing sectoral laws that apply to AI harms); Ernst & Young Comment at 11 (uniformity of accountability requirements might not be practical across sectors or even within the same sector); see also, e.g., Eric Schmidt Comment (arguing in an individual comment that “AI accountability should depend on business sector.”).

47 See, e.g., Google DeepMind Comment at 3 (regarding “hub-and-spoke” model of AI regulation, with sectoral regulators overseeing AI implementation with horizontal guidance from a central agency like NIST); Boston University and University of Chicago Researchers Comment at 3 (to enable existing sectoral authorities “to work most effectively and to ensure attention to generalizable risks of AI, we recommend establishment of a meta-agency with broad AI-related expertise (both technical and legal) which would develop baseline regulations regarding the general safety of AI systems, set standards, and enable review for compliance with substantive law, while collaborating with and lending its expertise to other agencies and lawmakers as they consider the impact of AI systems on their regulatory jurisdiction”); Credo AI at 5 (recommending that government “establish dedicated oversight of the procurement, development, and use of AI. . . . [and] consider the creation of a new independent Federal agency or a Cabinet-level position with oversight authority of AI systems.”); USTelecom Comment at 6 (“When individuals see that AI systems in different sectors are held to the same expectations, it assures them that adequate safeguards are in place to protect their rights and well-being, regardless of the company deploying AI.”); Salesforce Comment at 9 (AI rules should have a strong degree of horizontal consistency while recognizing that “some sectoral use cases will require different treatment based on the underlying activity.”); Center for American Progress (CAP) Comment at 12-13, 20 (highlighting the value of a distinct government body); Microsoft, Governing AI: A Blueprint for the Future (May 25, 2023) [hereinafter “Governing AI”], at 20-21 (endorsing a new regulator to implement an AI licensing regime for foundation models); Public Knowledge Comment at 2 (“We prescribe a hybrid approach of reliance on our sector specific regulators, already deeply embedded in the domains that matter to us most, to avert immediate and anticipated harms, while also cultivating new expertise with a centralized AI regulator that can adapt with the technology and provide a broader view of the full ecosystem.”); The Future Society Comment at 13 (“We are concerned that a lack of horizontal regulation in the US could perpetuate a regulatory vacuum and ‘race-to-the-bottom’ dynamics among [general-purpose AI system] developers, as they increasingly develop technologies that can pose risks to public health, safety, and welfare in an unregulated environment.”); see also The National Security Commission on Artificial Intelligence, Final Report (2021), Chapter 9 (proposing the creation of a new “Technology Competitiveness Council.”).